perm filename APPROX[W80,JMC] blob sn#501962 filedate 1980-03-17 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00005 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	Notes on Approximate Theories
C00010 00003	cartesian product counterfactuals (tree extensions)
C00014 00004	.cb Partially Meaningful Concepts with Applications to Counterfactuals
C00017 00005	.cb Counterfactuals in Cartesian Product Representations
C00018 ENDMK
C⊗;
Notes on Approximate Theories

	Approximate theories, e.g. those whose concepts do not admit
precise definition in terms of the state of the world, have states
that only partially describe the world.  The causal laws of such
theories sometimes fail to prescribe successor states, or prescribe
successor states that are less completely defined than the initial
states.  It may be a consequence of a theory that after some time the
theory will no longer apply, or at least that some of its objects may
go away.

	An automata-theoretic example might clarify these concepts.
It would be especially interesting to treat configurations in a
cellular automaton system as moving objects, as in the sketch below.
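	Here is a minimal version (in Python for convenience; Conway's
Life and the bounding-box convention for "position" are my own choices,
not fixed by anything above).  A glider is a configuration that the
approximate theory treats as a moving object with a location, although
the exact theory has only cell states:

    from collections import Counter

    def life_step(cells):
        # One step of Conway's Life; cells is the set of live (x, y) pairs.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    def position(cells):
        # Approximate "location" of a configuration: the min corner of
        # its bounding box.  The convention is arbitrary; the object and
        # its location exist only in the approximate theory.
        return (min(x for x, y in cells), min(y for x, y in cells))

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for t in range(8):
        cells = life_step(cells)
    # Every 4 steps the glider reappears displaced by (1, 1); the
    # "object" has moved although no cell moved.
    print(position(glider), position(cells))   # (0, 0) (2, 2)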

On Scott's advice

%3Scott, Dana (1970)%1: "Advice on Modal Logic" in %2Philosophical
Problems in Logic - Some Recent Developments%1, Karel Lambert (ed.),
D. Reidel Publishing Company.

	Scott's idea of indices (worlds) (points of view) seems usable.
Necessity-like operators are to be obtained by universally quantifying
over indices having a given relation to the %2referring%1 index.
Contrary to his advice, I favor introducing the indices explicitly
into a first order formalism.
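	A minimal sketch of such an operator (the encoding is mine, not
Scott's; the indices, accessibility relation and atomic sentences are
hypothetical):

    # Hypothetical accessibility relation and atomic sentences.
    accessible = {("i0", "i1"), ("i0", "i2"), ("i1", "i1")}
    holds = {("raining", "i1"), ("raining", "i2")}   # (sentence, index) pairs

    def nec(p, i):
        # "Necessarily p" from the referring index i: p holds at every
        # index bearing the accessibility relation to i.
        return all((p, j) in holds for (k, j) in accessible if k == i)

    print(nec("raining", "i0"))   # True: raining at every index accessible from i0
    print(nec("snowing", "i0"))   # False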

	If we are to treat approximate theories, then we must
be sophisticated about what entities exist in related indices.  Thus
some indices will have entities like causes and wishes but lack
determinism.

	For cross-index correspondences, we can use names of entities and
have a relation %2corr(x1,x2,i1,i2)%1, asserting that entity ⊗x1 in index
⊗i1 corresponds to ⊗x2 in ⊗i2, where the %2x%1's must be regarded as names
of names or concepts or some kind of intensional entity.  While rigid
designators can easily be declared in this formalism, a major interest
will lie in entities whose meaningfulness is restricted to a few indices.
This correspondence relation may have to apply to predicates and
functions as well as to individuals.
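	As data, ⊗corr might look as follows (the names, indices and
tuples are hypothetical illustrations; the point is that its arguments
are names, and that a name need not be meaningful in every index):

    # corr(x1, x2, i1, i2): the entity named x1 in index i1 corresponds
    # to the entity named x2 in index i2.
    corr = {
        ("Nixon", "Nixon", "actual", "counterfactual"),   # a rigid designator
        ("US-disapproval", "US-disapproval", "actual", "policy-theory"),
    }

    # meaningful(x, i): the name x denotes anything at all in index i.
    meaningful = {("Nixon", "actual"), ("Nixon", "counterfactual"),
                  ("US-disapproval", "actual"),
                  ("US-disapproval", "policy-theory")}

    def counterpart(x1, i1, i2):
        # Names in i2 corresponding to the entity named x1 in i1, if any.
        return [x2 for (a, x2, b, c) in corr if (a, b, c) == (x1, i1, i2)]

    print(counterpart("Nixon", "actual", "counterfactual"))    # ['Nixon']
    print(counterpart("US-disapproval", "actual", "physics"))  # [] -- no counterpart
    print(("US-disapproval", "physics") in meaningful)         # False -- not meaningful there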

	I don't know whether Scott proposes that theories define his
set ⊗V of virtual entities, but anyway this won't be possible in the
cases of interest to us.

EXAMPLES

	Here are some examples I would like to treat, but I am not sure
what we want to say, let alone how to express it formally.

.item←0
	#. Physical objects have locations, and smaller objects may be
located on parts of the larger objects or constitute parts of larger
objects.  It doesn't make sense to ask the address of Washington, D.C.
We need to elaborate the argument that we cannot define the location
of an extended object as the position of its center of mass.  This
assumes more physics than a user of the concept usually knows, and if
you take general relativity seriously, it isn't precisely defined
anyway.  One could try to define a hierarchy of location sizes, but
again this isn't epistemologically plausible, i.e. it isn't plausible
that people have such a hierarchy as part of their common sense knowledge.

	#. The U.S. disapproves of the Soviet invasion of Afghanistan.
While this corresponds to the disapproval of many individuals in the
U.S., it is a distinct entity.  Namely, some theories of international
behavior can use concepts of the opinions of nations to draw certain
conclusions without requiring a catalog giving for each nation a list
of the individuals (or their positions) whose disapproval constitutes
disapproval by the nation.  Ascribing a mental attitude to a nation
does not justify fully personifying the nation and ascribing to it
all the mental qualities of an individual person.  Common sense usage
doesn't do it.  It would be interesting to try to formalize just how
far this ascription can go.  Certainly it is used as far as ascribing
goals, beliefs and actions and supposing that certain of the actions
taken are in accordance with beliefs that they are appropriate ways
of achieving the goals.  However, these national mental attitudes
"interact" with those of individuals or groups at the very next
level of analysis, and mixed levels wherein national, group and
individual mental qualities are discussed are used in common sense
reasoning.  Of course, it may turn out that ascription of attitudes
to nations is not useful as soon as any kind of concrete analysis
is required.  It might be the most appropriate way for an Afghan
who cannot name even one American to express what he knows but
inappropriate for anyone with greater knowledge.
A sophisticated American policy analyst, who might have no reason
to say that the U.S. is angry at the S.U., might still ascribe this
belief to Afghans.

	When we use indices to represent points of view, a problem
will arise as soon as we need to express people's beliefs about
points of view.  This problem arises in any formalism.

	From the AI point of view, the chief problems with approximate
theories concern what kinds of reasoning (in particular, what
predictions) they permit.  Existence as a predicate will be important.

"Society A permits adultery, while society B doesn't" as an
approximate concept.

Because of the finiteness of life, utility is an approximate concept.
cartesian product counterfactuals (tree extensions)
causal systems

natural versus conventional co-ordinates
what is the best non-causal counterfactual

circumscription
giving up identity conditions
second order definitions

an approximate system without any intensional/extensional distinction

The philosophers are too ready to assume that there is only one
useful concept covering an area of experience.  Just one example
is the argument about whether one sees the dog or a dog-appearance.
We are not obliged to take one as more fundamental than the other,
and common sense admits both.  Common sense is also capable of
quite new interpretations when it becomes useful to introduce
them.  This is fuzzy, because I don't remember the most striking
examples.

When we describe the beliefs of dogs we may use any formalism
that is convenient, e.g. sentences in French, first order logic,
concepts as objects.

Some results can be obtained by taking ordinary common sense
concepts as theory-dependent.  While many philosophers try to
give direct definitions, they aren't surprised or impressed when
someone says that such concepts are terms in rather complex theories.

Abstract models whose applicability is to be examined after their
properties are determined.

examples
"if he had bent his knees, he wouldn't have fallen"

Consider a computer program equipped with a television camera
and operating as follows.  When an image comes in, programs look
for objects in the scene.  If they find them, they store sentences
about the objects and their locations and attitudes in memory.
The program reasons with these and other sentences.  When
asked a question, it can only use these sentences to answer.
It has no access to the TV image, which is not stored after
it has been used for object recognition.  Such a program
sees dogs, or at least dog appearances, and doesn't see
brown patches.
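	A minimal sketch of such a program (the function names and the
sentence format are inventions for illustration):

    memory = []   # the program's only persistent record of what it saw

    def recognize_objects(image):
        # Stand-in for the vision routines: (kind, location, attitude) triples.
        return [("dog", "on the rug", "sitting")]

    def observe(image):
        for kind, loc, att in recognize_objects(image):
            memory.append(f"there is a {kind} {att} {loc}")
        # The image itself is discarded here; brown patches are never stored.

    def answer(question):
        # Question answering sees only the stored sentences.
        return [s for s in memory if question in s]

    observe("<tv frame>")    # stands for raw pixel data
    print(answer("dog"))     # ['there is a dog sitting on the rug']
    print(answer("brown"))   # [] -- the program never saw a brown patch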
.cb Partially Meaningful Concepts with Applications to Counterfactuals

	Many useful common sense concepts seem to dissolve when examined
closely.  Attempts to define them formally seem to require making
distinctions that are unheard of by ordinary users of the concepts.
Even after the distinctions are made, examples crop up that require
further distinctions.  Our thesis is that this does not prevent
us from treating such concepts formally.

	The proper domain of such a %2partially
meaningful concept%1 is an %2approximate theory%1 of some aspect of the
world.  The approximate theory is useful because it relates observations
that humans (or computer programs) can perform, general facts about the
world, facts about particular situations, actions that can be performed,
and the new situations that result from these actions.  However, when we
try to reduce the theory to science, for example physics, we may discover
that its concepts cannot be defined in terms of what we
know about the physical world.  Often they cannot even be axiomatized
in a manner consistent with our knowledge of the physical world.

	The examples I can give all involve counterfactual conditional
statements, and many of them involve causal systems.  I don't know
whether this is an essential feature.

.cb Counterfactuals in Cartesian Product Representations

	Suppose a system has a state space ⊗S represented as
a cartesian product

!!b1:	%2S = S↓1qxS↓2qxS↓3%1,

where we have chosen three components for no particular reason.
Let %2x = (x↓1,x↓2,x↓3)%1 be a variable taking values
in the space, and let %2a = (a↓1,a↓2,a↓3)%1 be a point in the space.  The
counterfactual conditional "if %2x↓2%1 were 5" has an obvious meaning:
it denotes the world %2(a↓1,5,a↓3)%1.